We present an implicit likelihood approach to quantifying cosmological information from discrete catalogue data assembled as graphs. To do so, we explore cosmological inference from mock dark matter halo catalogues. We employ Information Maximising Neural Networks (IMNNs) to quantify the Fisher information extracted as a function of the graph representation. We a) demonstrate, in the noise-free limit, that modular graph structure is highly sensitive to the underlying cosmology, b) show, by comparison to traditional statistics, that the network automatically combines mass and clustering information, c) demonstrate that the graph neural network can still extract information when the catalogue is subject to noisy survey cuts, and d) illustrate how nonlinear IMNN summaries can be used as asymptotically optimal compressed statistics for Bayesian implicit likelihood inference. We tighten the joint $\Omega_m, \sigma_8$ parameter constraints by a factor of 42 relative to the two-point correlation function. This work makes use of a new IMNN implementation for graph data in JAX, which can exploit either numerical or automatic differentiability. We also show that IMNNs successfully compress simulations far from the fiducial model on which the network was fitted, suggesting a promising alternative to $n$-point statistics in catalogue-based analyses.
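To illustrate the Fisher-information estimate that such compressed summaries admit, the following is a minimal sketch of the standard finite-difference estimate $F_{ab} = \partial_a \mu^\top C^{-1} \partial_b \mu$ over network summaries. It uses plain NumPy rather than the paper's JAX implementation, and all array names and shapes are illustrative assumptions, not the paper's API.

```python
import numpy as np

def fisher_from_summaries(summaries_fid, summaries_plus, summaries_minus, delta_theta):
    """Estimate the Fisher matrix of compressed summaries via finite differences.

    summaries_fid   : (n_sims, n_summaries) summaries at the fiducial cosmology
    summaries_plus  : (n_params, n_sims, n_summaries) summaries at theta_i + delta
    summaries_minus : (n_params, n_sims, n_summaries) summaries at theta_i - delta
    delta_theta     : (n_params,) finite-difference step per parameter
    """
    # Covariance of the summaries at the fiducial model
    C = np.cov(summaries_fid, rowvar=False)
    Cinv = np.linalg.inv(C)

    # Numerical derivative of the mean summary with respect to each parameter
    dmu = (summaries_plus.mean(axis=1) - summaries_minus.mean(axis=1)) / (2 * delta_theta[:, None])

    # F_ab = dmu_a . Cinv . dmu_b
    return dmu @ Cinv @ dmu.T
```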
Robots are increasingly deployed in spaces shared with humans, including domestic and industrial settings. In these environments, human-robot interaction (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in robots, but robots in industrial environments are usually not anthropomorphic. We designed a simple interaction in an industrial setting in which an Anthropomorphic Robotic Mock Driver (ARMoD) robot rides on top of an automated guided vehicle (AGV). The task consists of a human crossing paths with the AGV, with or without the ARMoD mounted on top, in a narrow corridor. The human and the system need to negotiate their trajectories when crossing paths, meaning the human has to attend to the robot's trajectory to avoid colliding with it. Reported trust scores increased significantly in the presence of the ARMoD, indicating that the presence of an anthropomorphic robot is sufficient to modulate trust, even in limited interactions such as the one presented here.
Multiple instance learning (MIL) is widely used for computer-aided interpretation of pathological whole slide images (WSIs) to address the lack of pixel- or patch-wise annotations. Such methods typically apply "natural-image-driven" MIL algorithms directly, ignoring the multi-scale (i.e., pyramidal) nature of WSIs. Off-the-shelf MIL algorithms are usually deployed at a single scale of a WSI (e.g., 20x magnification), whereas human pathologists aggregate global and local patterns in a multi-scale manner (e.g., by zooming across different magnifications). In this study, we propose a novel cross-scale attention mechanism to explicitly aggregate inter-scale interactions within a single MIL network for Crohn's disease (CD), a form of inflammatory bowel disease. The contribution of this paper is two-fold: (1) a cross-scale attention mechanism is proposed to aggregate features from multi-scale interactions at different resolutions; (2) differential multi-scale attention visualizations are generated to localize explainable lesion patterns. By training on approximately 250,000 H&E-stained ascending colon (AC) patches from 20 CD patients and 30 healthy controls at different scales, our approach achieved an area-under-the-curve (AUC) score of 0.8924, outperforming the baseline model. The official implementation is publicly available at https://github.com/hrlblab/cs-mil.
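To make the cross-scale idea concrete, the snippet below is an illustrative sketch, not the released cs-mil architecture: an attention layer pools patch (instance) features within each magnification, and a second attention weighs the resulting scale embeddings before classification. All layer sizes, module names, and the two-stage design are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CrossScaleAttentionMIL(nn.Module):
    """Toy cross-scale attention pooling for multi-scale WSI patch features."""

    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.instance_attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.scale_attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bags_per_scale):
        # bags_per_scale: list of tensors, one per scale, each (n_patches_s, feat_dim)
        scale_embeddings, instance_attn_maps = [], []
        for feats in bags_per_scale:
            a = torch.softmax(self.instance_attn(feats), dim=0)   # (n_patches_s, 1)
            scale_embeddings.append((a * feats).sum(dim=0))       # (feat_dim,)
            instance_attn_maps.append(a.squeeze(-1))
        z = torch.stack(scale_embeddings)                         # (n_scales, feat_dim)
        s = torch.softmax(self.scale_attn(z), dim=0)              # (n_scales, 1)
        slide_embedding = (s * z).sum(dim=0)                      # (feat_dim,)
        # Return logits plus attention weights for visualization
        return self.classifier(slide_embedding), s.squeeze(-1), instance_attn_maps
```

The returned scale and instance attention weights are what a visualization of "differential multi-scale attention" would be built from in such a sketch.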
Given a small training dataset and a learning algorithm, how much more data is needed to reach a target validation or test performance? This question is critical in applications such as autonomous driving or medical imaging, where collecting data is expensive and time-consuming. Overestimating or underestimating data requirements incurs substantial costs that could be avoided with an adequate budget. Prior work on neural scaling laws suggests that a power-law function can fit the validation performance curve and extrapolate it to larger dataset sizes. We find that this does not immediately translate to the more difficult downstream task of estimating the dataset size required to meet a target performance. In this work, we consider a broad class of computer vision tasks and systematically investigate a family of functions that generalize the power law to allow for better estimation of data requirements. Finally, we show that incorporating a tuned correction factor and collecting over multiple rounds significantly improves the performance of the data estimators. Using our guidelines, practitioners can accurately estimate data requirements of machine learning systems to save development time and data acquisition costs.
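As a concrete illustration of the baseline power-law extrapolation this work builds on (not the proposed generalized function family or its correction factors), the sketch below fits a saturating power law to a few (dataset size, score) pairs and inverts it for a target score. The functional form, initial guesses, and numbers are toy assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # v(n) = c - a * n**(-b): validation score saturating at c as n grows
    return c - a * n ** (-b)

def estimate_required_data(sizes, scores, target):
    """Fit a saturating power law to (dataset size, score) pairs and
    invert it to estimate the dataset size needed to reach `target`."""
    (a, b, c), _ = curve_fit(power_law, sizes, scores,
                             p0=[1.0, 0.5, max(scores)], maxfev=10000)
    if target >= c:
        raise ValueError("target exceeds the fitted performance ceiling")
    # Solve c - a * n**(-b) = target for n
    return (a / (c - target)) ** (1.0 / b)

# Toy usage: scores observed at a few subset sizes, extrapolated to 90% accuracy
sizes = np.array([1_000, 2_000, 5_000, 10_000])
scores = np.array([0.62, 0.68, 0.75, 0.80])
print(estimate_required_data(sizes, scores, target=0.90))
```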
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
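Muse's parallel decoding follows the general masked-token refinement pattern sketched below (MaskGIT-style): start from a fully masked token grid, predict every masked token at each step, and keep only the most confident predictions according to a schedule. The model interface, mask handling, and cosine schedule here are illustrative assumptions, not Muse's actual implementation.

```python
import math
import torch

def parallel_decode(model, text_embedding, seq_len, n_steps=12, mask_id=-1):
    """Schematic MaskGIT-style parallel decoding over discrete image tokens."""
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for step in range(n_steps):
        logits = model(tokens, text_embedding)              # (1, seq_len, vocab_size)
        probs = logits.softmax(dim=-1)
        confidence, prediction = probs.max(dim=-1)          # (1, seq_len)
        # Fraction of tokens left masked after this step (cosine schedule)
        mask_ratio = math.cos(math.pi / 2 * (step + 1) / n_steps)
        n_keep_masked = int(mask_ratio * seq_len)
        still_masked = tokens == mask_id
        # Tokens fixed in earlier steps are never re-masked
        confidence = confidence.masked_fill(~still_masked, float("inf"))
        tokens = torch.where(still_masked, prediction, tokens)
        if n_keep_masked > 0:
            # Re-mask the least confident newly predicted positions
            remask = confidence.topk(n_keep_masked, largest=False).indices
            tokens[0, remask[0]] = mask_id
    return tokens
```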
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques, reconstructing lost features with a pixel-to-pixel approach based on an altered super-resolution generative adversarial network (SRGAN) architecture, to better aid clinicians in their decision-making and improve patient outcomes.
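A minimal sketch of the Gaussian spectral windowing used to simulate a lower axial resolution follows: multiplying the raw interference spectrum by a Gaussian narrows the effective source bandwidth, which broadens the axial point-spread function of the resulting A-scan. The parameterization (bandwidth fraction interpreted as a FWHM ratio) and function names are assumptions for illustration.

```python
import numpy as np

def simulate_reduced_axial_resolution(spectral_fringe, relative_bandwidth=0.5):
    """Apply a Gaussian window in the spectral domain before the inverse FFT
    that produces the A-scan, mimicking a narrower-bandwidth OCT source.

    spectral_fringe    : (n_samples,) raw interference spectrum of one A-scan
    relative_bandwidth : fraction of the original bandwidth to retain (FWHM ratio)
    """
    n = spectral_fringe.shape[0]
    k = np.arange(n) - n / 2
    sigma = relative_bandwidth * n / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> sigma
    window = np.exp(-0.5 * (k / sigma) ** 2)
    # Narrower spectral support -> broader axial point-spread function
    a_scan = np.fft.ifft(spectral_fringe * window)
    return np.abs(a_scan)
```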
Compliance in actuation has been exploited to generate highly dynamic maneuvers such as throwing that take advantage of the potential energy stored in joint springs. However, the energy storage and release could not yet be well timed. On the contrary, for multi-link systems, the natural system dynamics might even work against the actual goal. With the introduction of variable stiffness actuators, this problem has been partially addressed. With a suitable optimal control strategy, the approximate decoupling of the motor from the link can be achieved to maximize the energy transfer into the distal link prior to launch. However, such continuous stiffness variation is complex and typically leads to oscillatory swing-up motions instead of clear launch sequences. To circumvent this issue, we investigate decoupling for speed maximization with a dedicated novel actuator concept denoted Bi-Stiffness Actuation. With this, it is possible to fully decouple the link from the joint mechanism by a switch-and-hold clutch and simultaneously keep the elastic energy stored. We show that with this novel paradigm, it is not only possible to reach the same optimal performance as with power-equivalent variable stiffness actuation, but even to directly control the energy transfer timing. This is a major step forward compared to previous optimal control approaches, which rely on optimizing the full time-series control input.
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System to classify DR grading, localize lesion areas, and provide visual explanations; (ii) DRG-Expert-Interaction to receive feedback from expert users and improve the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations by using Wasserstein distance and adversarial learning-based entropy minimization. In addition, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion and classification features, our approach remains robust to a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.